Managing and operating hydraulic infrastructure is a challenging task. Given climate change, risks and uncertainties, it is increasingly important to ensure environmental, economic and social sustainability. Adequate and efficient tools can contribute solutions supported by hydrodynamic models, which simulate the flow of water along river channels and floodplains.
Web-based tools, in turn, provide a suitable environment to illustrate the hydrodynamic behavior of a given river under different conditions and to evaluate various scenarios and management strategies. This approach can also serve as an academic resource for learning purposes.
This work presents a Web interface, supported by a Jupyter Notebook, which provides functionality to configure a hydrodynamic river model and simulate its behavior. The model setup is run through the MIKE 1D numerical simulation engine behind the MIKE HYDRO River commercial software.
The approach uses the Mondego River as a case study and supports different datasets for different meteorological conditions, although here, for demonstration purposes, a particular dataset is used. The tool also provides access to real data, such as water levels and streamflows, collected by geosensors located in the study area.
Throughout this notebook, each successive task is explained. Let's start by loading some Python libraries.
%load_ext autoreload
%autoreload 2
from pathlib import Path
import pandas as pd
import geoviews as gv
import geopandas as gpd
import cartopy
from cartopy import crs as ccrs
import holoviews as hv
import hvplot
import hvplot.pandas
import panel as pn
import param
import src.riverhydrosim.mike1Dsimulation as sim
import src.riverhydrosim.utils as utils
# pandas options
pd.set_option('display.max_rows', 10)
pn.extension()
hv.extension('bokeh')
Define the folders containing the datasets and the river model to run:
MODELS_DIR = "./models/Mondego_Simulacao_CheiaJan2016"
DATA_DIR = "./data"
The Mondego River, located in the central region of Portugal, is the fifth-largest Portuguese river and the largest with both its headwaters and mouth in Portuguese territory.
The case study covers the stretch of the Mondego River between the Aguieira-Raiva dams and the dam bridge of Coimbra City, approximately 36.5 km long. It considers the two main tributaries, the Alva River and the Ceira River. The streamflow of the Alva River is controlled by the Fronhas Dam, whereas the Ceira River's streamflow is uncontrolled.
The Mondego River widens substantially around Coimbra City, making it a flood-prone area. Since the riparian zones of Coimbra are often affected by river floods during intense rainfall events, studying the river's hydrodynamics is fundamental for flood forecasting and risk analysis, and will help decision-makers be better prepared for floods.
Some images of past flood events (January 2016) in Coimbra city:
*(Photos: Santa Clara Convent and Green Park during the January 2016 flood.)*
river_shp = Path("./shapefiles/Mondego_opt_Branches.shp")
cross_sections_shp = Path("./shapefiles/Mondego_opt_CrossSections.shp")
inflow_points_shp = Path("./shapefiles/inflow_points.shp")
sensors_shp = Path("./shapefiles/sensors.shp")
opts_1 = dict(height=500, width=700, fontsize={"legend": 8, "title": 12}, active_tools=["box_zoom", "pan"])
opts_2 = dict(show_legend=True, apply_ranges=True, tools=["hover"])
# basemap = gv.tile_sources.OSM
basemap = gv.tile_sources.EsriImagery()
# river shape
r = gpd.read_file(str(river_shp), encoding="ISO-8859-1")
river = gv.Path(r, crs=ccrs.GOOGLE_MERCATOR, vdims=[("BR_BrName", "branch_name")],
label="Mondego river").opts(color="blue", line_width=1.5, **opts_2)
# key cross section shapes
cs = gpd.read_file(str(cross_sections_shp), encoding="ISO-8859-1")
key_cs = cs.iloc[[15,16,18,20]]
cross_sections = gv.Path(key_cs, crs=ccrs.GOOGLE_MERCATOR,
vdims=["CS_ID", ("CS_Ch", "CS_chainage")],
label="Cross sections").opts(color="red", line_width=2, **opts_2)
# boundary (inflow) point shapes
inflows = gpd.read_file(str(inflow_points_shp), encoding="ISO-8859-1")
inflow_points = gv.Points(inflows, crs=ccrs.GOOGLE_MERCATOR,
vdims=["Name", ("UDC_Ch", "BR_chainage"), ("UDC_BrName", "branch_name")],
label="Boundaries (inflows)").opts(color="cyan", size=7, **opts_2, legend_position="top_left")
# sensor shapes
s = gpd.read_file(str(sensors_shp), encoding="ISO-8859-1")
sensors = gv.Points(s, crs=ccrs.GOOGLE_MERCATOR,
vdims=[("Code_3", "SE_ID"), ("Name", "name"), ("Code", "SNIRH_code")],
label="Sensors").opts(color="yellow", size=7, **opts_2, legend_position="top_left")
p = basemap.opts(**opts_1) * river * cross_sections * inflow_points * sensors
p.opts(padding=2.0)
There are two options for loading data from geosensors: remote access, which downloads the observations directly from the SNIRH Web service, and local access, which reads a previously saved CSV file.
Let's load some water level observations, for instance between 2016-01-01 and 2016-01-12, when Coimbra City suffered a severe flood (see images above).
tmin="2016-01-01"
tmax="2016-01-12"
# REMOTE access
args = ("remote", "", tmin, tmax)
# LOCAL access
# args = ("local", Path(DATA_DIR) / "processed/water_level_observ_20160101_20160112_.csv", tmin, tmax)
def load_water_level_observations(source="local", file="", tmin="", tmax=""):
    # SNIRH expects dates in dd-mm-YYYY order
    tmin = pd.to_datetime(tmin).strftime(format="%d-%m-%Y")
    tmax = pd.to_datetime(tmax).strftime(format="%d-%m-%Y")
    try:
        if source == "remote":
            info = {"code": ["12G/01A", "12G/04H"],
                    "name_abbrev": ["APC", "PSC"],
                    "sites": [1627743374, 1627759222],
                    "pars": [1629599726, 1843],
                    "ref_level": [0.0, 15.45]
                    }
            url_template = ("https://snirh.apambiente.pt/snirh/_dadosbase/site/paraCSV/"
                            "dados_csv.php?sites={}&pars={}&tmin={}&tmax={}&formato=csv")
            dfs = []
            for i in range(len(info["name_abbrev"])):
                url = url_template.format(info["sites"][i], info["pars"][i], tmin, tmax)
                df_obs = pd.read_csv(url, skiprows=4, encoding="latin-1")
                if df_obs.empty:
                    print("Unable to download water level observations from sensor {}.".format(info["name_abbrev"][i]))
                else:
                    # re-read with explicit column names
                    df_obs = pd.read_csv(url, skiprows=4, names=["datetime", "water_level"],
                                         usecols=[0, 1], encoding="latin-1")
                    df_obs = df_obs[:-1]  # last row is garbage
                    # format datetime column
                    df_obs["datetime"] = pd.to_datetime(df_obs["datetime"], format='%d/%m/%Y %H:%M')
                    # shift readings to a common reference level (datum correction)
                    df_obs["water_level"] = df_obs["water_level"] + info["ref_level"][i]
                    df_obs["name_abbrev"] = info["name_abbrev"][i]
                    df_obs["code"] = info["code"][i]
                    dfs.append(df_obs)
            if dfs:  # list not empty
                return pd.concat(dfs, ignore_index=True)  # concatenate all dataframes
            return None  # no data downloaded
        elif source == "local":
            return pd.read_csv(file, usecols=["datetime", "water_level", "name_abbrev", "code"],
                               encoding="latin-1", parse_dates=["datetime"])
    except Exception as e:
        print(e)
df_obs = load_water_level_observations(*args)
# display with clearer column names (rename returns a copy; df_obs itself is unchanged)
df_obs.rename(columns={"water_level": "water_level (m)", "name_abbrev": "sensor_ID", "code": "SNIRH_code"})
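The SNIRH export endpoint expects dates in day-month-year order, which is why the loader reformats `tmin`/`tmax` before building the URL. The conversion can be sketched in isolation (`to_snirh_date` is a hypothetical helper name, not part of the notebook's codebase):

```python
from datetime import datetime

def to_snirh_date(iso_date: str) -> str:
    """Convert an ISO date (YYYY-MM-DD) to the dd-mm-YYYY format used by SNIRH."""
    return datetime.strptime(iso_date, "%Y-%m-%d").strftime("%d-%m-%Y")

print(to_snirh_date("2016-01-01"))  # 01-01-2016
```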
opts = dict(width=700, height=400, fontsize={"legend": 8, "title": 10}, legend_position="top_left")
plot_sensors_data = df_obs.hvplot(x="datetime", y="water_level", ylabel="water level (m)",
title="Geosensor observations: water level", by="name_abbrev",
line_width=1.5, color=["green", "orange"]).opts(**opts)
plot_sensors_data
System inflows are located at:
- the Aguieira-Raiva dams, on the Mondego River;
- the Fronhas Dam, on the Alva River;
- the (uncontrolled) Ceira River intermediate basin.
The inflow time series studied in detail begin at 2016-01-01 00:00:00 and end at 2016-01-12 23:00:00. They can be loaded from a spreadsheet or a CSV file.
# Data file
data_file = Path(DATA_DIR) / "raw" / "data_mondego_2016.xlsx"
# data_file = Path(DATA_DIR) / "raw" / "data_mondego_2016.csv"
cols = dict(zip(["data_hora", "Q_efluente_RV", "Q_efluente_FR", "Q_ceira_bacia_interm"],
["datetime", "Qin_Aguieira_Raiva", "Qin_Alva", "Qin_Ceira"]))
# inflows dataframe
df_inflows = pd.read_excel(data_file, sheet_name="cheia_jan2016", usecols=cols.keys(),
index_col="data_hora").loc["2016-01-01 00:00:00":"2016-01-12 23:00:00"]
df_inflows.index.rename(cols["data_hora"], inplace=True)
df_inflows_ = df_inflows.rename(columns={k: v + " (m3/s)" for (k, v) in cols.items()}).round(2)
df_inflows.rename(columns=cols, inplace=True)
df_inflows_
df_inflows.describe()
opts = dict(width=700, height=400, fontsize={"legend": 8, "title": 10}, legend_position="top_left")
plot_inflows = df_inflows.hvplot(title="Inflows", xlabel="datetime", ylabel="discharge (m3/s)").opts(**opts)
plot_inflows
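A quantity often derived from inflow series like these is the total volume that entered the system, obtained by integrating discharge over time. A minimal sketch using trapezoidal integration on hourly data (the values are illustrative, not taken from the dataset):

```python
import pandas as pd

# illustrative hourly discharge series (m3/s)
idx = pd.date_range("2016-01-01", periods=3, freq="h")
q = pd.Series([100.0, 200.0, 150.0], index=idx, name="Q")

# trapezoidal rule: average of consecutive values times the 3600 s step
volume_m3 = ((q + q.shift()) / 2).dropna().sum() * 3600
print(volume_m3)  # 1170000.0
```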
In order to run the MIKE 1D numerical simulation engine, the inflow time series must be in a proprietary format, dfs0, so the CSV/spreadsheet data must be converted first.
output_folder = Path(DATA_DIR) / "processed"
output_files = [col + ".dfs0" for col in df_inflows.columns]
for col, f in zip(df_inflows.columns, output_files):
    dfs0_file = output_folder / f
    # convert each inflow series to a dfs0 file
    utils.convert_discharge_ts_to_dfs0_2(df_inflows[col], dfs0_file)
Let's verify if the files were actually created:
! ls ./data/processed/Qin*
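The shell listing above only works where `ls` is available; a portable, pure-Python check of the conversion outputs can look like this (`missing_outputs` is a hypothetical helper, not part of the notebook's codebase):

```python
from pathlib import Path

def missing_outputs(folder, filenames):
    """Return the expected files that are not present in `folder`."""
    folder = Path(folder)
    return [f for f in filenames if not (folder / f).exists()]

expected = ["Qin_Aguieira_Raiva.dfs0", "Qin_Alva.dfs0", "Qin_Ceira.dfs0"]
# e.g. missing_outputs("./data/processed", expected) should return [] after conversion
```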
A simplified 1D hydrodynamic river model was developed using the MIKE HYDRO River software (not shown in this notebook). This software has a relatively complex graphical interface, and this type of model requires configuring many parameters and schematically defining the fluvial network on a map.
After the model setup file has been created, it can be run through the MIKE 1D numerical simulation engine behind MIKE HYDRO River.
MIKE 1D is a one-dimensional numerical simulation engine that simulates water flow in a network (e.g., river, water distribution, storm water drainage and sewer collection networks). The MIKE 1D API provides a high level of interaction with, and control of, the MIKE 1D engine, making it possible to programmatically create or modify a setup and to interact with the engine during simulation from user-defined code. The same functionality available in the setup editors can be used at runtime.
Here we only want to adjust some of the parameters and analyze their effect on the results - water levels and discharges - in some of the river's cross sections, those where flooding usually occurs. The key cross sections, located in the flood-prone zone of Coimbra City, are:
- Clube Náutico (CN);
- Choupalinho (CH);
- Ponte de Santa Clara (PSC);
- Açude-Ponte de Coimbra (APC).
The graphs are interactive and synchronized with each other. The user can explore different sets of parameter values and quickly inspect the results. The notebook code can be extended to perform other relevant analyses, such as statistical ones.
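The parameter class below uses `@param.depends(..., watch=True)` callbacks so that changing a unit selector resets the bounds and default of the matching slider. The idea can be sketched without `param`, as a plain observer-style class (`TimeStepModel` is hypothetical; the values mirror the time-step settings below):

```python
class TimeStepModel:
    """Changing the unit resets the value bounds and default, mirroring the
    @param.depends(..., watch=True) callbacks in the simulation class below."""
    BOUNDS = {"minutes": (1, 30, 5), "seconds": (1, 60, 30)}  # (low, high, default)

    def __init__(self, unit="minutes"):
        self.set_unit(unit)

    def set_unit(self, unit):
        low, high, default = self.BOUNDS[unit]
        self.unit, self.bounds, self.time_step = unit, (low, high), default

m = TimeStepModel()
m.set_unit("seconds")
print(m.bounds, m.time_step)  # (1, 60) 30
```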
class Mike1DSimulation_parameters(param.Parameterized):
    # parameterized inputs
    dt_min = pd.Timestamp(2016, 1, 1, 0, 0, 0)
    dt_max = pd.Timestamp(2016, 1, 12, 23, 0, 0)
    simulation_start = param.Date(default=pd.Timestamp(2016, 1, 5, 0, 0, 0),
                                  bounds=(dt_min, dt_max),
                                  doc="Simulation start date/time.")
    simulation_end = param.Date(default=pd.Timestamp(2016, 1, 12, 23, 0, 0),
                                bounds=(dt_min, dt_max),
                                doc="Simulation end date/time.")
    time_step = param.Integer(default=5, bounds=(1, 30), step=1)
    time_step_unit = param.ObjectSelector(default="minutes", objects=["minutes", "seconds"])
    storing_frequency = param.Integer(default=60, bounds=(1, 60), step=5)
    storing_frequency_unit = param.ObjectSelector(default="minutes", objects=["minutes", "hours"])
    grid_spacing = param.Integer(default=1000, bounds=(50, 2000), step=50, doc="Grid spacing in meters.")
    delta = param.Number(default=0.5, bounds=(0.5, 1.0), step=0.05, doc="Delta value.")
    # button = param.Action(label="Run")

    # other inputs
    hydro_model_setup_file = str((Path(MODELS_DIR) / "Mondego_opt.mhydro").resolve())
    verbose = False
    chainages = ('clube_nautico', 'choupalinho', 'pte_sta_clara', 'acude_pte_cbra')
    dfs0_aguieira = str((Path(DATA_DIR) / "processed" / "Qin_Aguieira_Raiva.dfs0").resolve())
    dfs0_foz_alva = str((Path(DATA_DIR) / "processed" / "Qin_Alva.dfs0").resolve())
    dfs0_foz_ceira_baciainterm = str((Path(DATA_DIR) / "processed" / "Qin_Ceira.dfs0").resolve())

    @param.depends("time_step_unit", watch=True)
    def _update_time_step_bounds(self):
        # keep the time step bounds and default consistent with the selected unit
        if self.time_step_unit == "minutes":
            self.param.time_step.bounds = (1, 30)
            self.param.set_param(time_step=5)
        elif self.time_step_unit == "seconds":
            self.param.time_step.bounds = (1, 60)
            self.param.set_param(time_step=30)

    @param.depends("storing_frequency_unit", watch=True)
    def _update_storing_freq_bounds(self):
        # keep the storing frequency bounds and default consistent with the selected unit
        if self.storing_frequency_unit == "minutes":
            self.param.storing_frequency.bounds = (1, 60)
            self.param.storing_frequency.step = 5
            self.param.set_param(storing_frequency=60)
        elif self.storing_frequency_unit == "hours":
            self.param.storing_frequency.bounds = (1, 6)
            self.param.storing_frequency.step = 1
            self.param.set_param(storing_frequency=1)

    # =================================================================================================================
    # run MIKE 1D simulation and plot results (water levels and discharges in key cross sections)
    def run_and_plot_results(self):
        try:
            df = sim.run_simulation(hydro_model_setup_file=self.hydro_model_setup_file,
                                    chainages=self.chainages,
                                    simulation_start=str(self.simulation_start),
                                    simulation_end=str(self.simulation_end),
                                    delta=self.delta, grid_spacing=self.grid_spacing,
                                    time_step=(self.time_step, self.time_step_unit),
                                    time_span=(self.storing_frequency, self.storing_frequency_unit),
                                    dfs0_aguieira=self.dfs0_aguieira,
                                    dfs0_foz_alva=self.dfs0_foz_alva,
                                    dfs0_foz_ceira_baciainterm=self.dfs0_foz_ceira_baciainterm,
                                    verbose=self.verbose)
            self.results = df
            # build water level and discharge charts
            opts = dict(width=750, height=350, fontsize={"legend": 8, "title": 10}, legend_position="top_left")
            overtopping_levels = {"CN": 19.050, "CH": 18.840, "PSC": 25.5, "APC": 25.5}  # unit: meters
            ## plot water level results
            cols = [col for col in df.columns if "WL_" in col]
            labels = [col.split("_")[1] for col in cols]
            plot_WL = hv.Overlay([hv.Curve(df[cols[i]], label=labels[i]).opts(tools=["hover"], ylabel="water level (m)") *
                                  hv.HLine(overtopping_levels[labels[i]]).opts(line_width=1.5, line_dash="dotted")
                                  for i in range(len(cols))]).relabel("Results: water level").opts(**opts)
            ## plot discharge results
            cols = [col for col in df.columns if "Q_" in col]
            labels = [col.split("_")[1] for col in cols]
            plot_Q = hv.Overlay([hv.Curve(df[cols[i]], label=labels[i]).opts(tools=["hover"], ylabel="discharge (m3/s)")
                                 for i in range(len(cols))]).relabel("Results: discharge").opts(**opts)
            # return (plot_WL + plot_Q).cols(1)
            return plot_WL, plot_Q
        except Exception as e:
            if isinstance(e, sim.Mike1DException):
                # show the engine error message in the GUI instead of raising
                return (pn.pane.Markdown(str(e), min_width=600, style={'color': 'red'}), None)
            raise e

    def view(self):
        obj1, obj2 = self.run_and_plot_results()
        if obj2:
            return (obj1 + obj2).cols(1)
        return pn.Column(pn.Spacer(height=20), obj1)
simulation = Mike1DSimulation_parameters(name="")
button = pn.widgets.Button(name='Run', button_type='primary', width=280)
def update(event):
gui[1] = simulation.view()
button.param.watch(update, "clicks")
gui = pn.Row(pn.Column(simulation, pn.Spacer(height=50), button), simulation.view()).servable()
gui
# gui.show()
plot_sensors_data.opts(width=500) + plot_inflows.opts(width=500)
df_results = simulation.results
df_results.describe()
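Beyond `describe()`, a typical follow-up is to locate the flood peak in each water level series and compare it against the overtopping thresholds used in the plots. A sketch on an illustrative frame (real column names follow the `WL_<ID>` pattern; the CN threshold of 19.050 m is the one defined above; the values are invented for illustration):

```python
import pandas as pd

# illustrative results frame with a datetime index
idx = pd.date_range("2016-01-05", periods=4, freq="6h")
df = pd.DataFrame({"WL_CN": [18.2, 19.4, 19.1, 18.6]}, index=idx)

peak_level = df["WL_CN"].max()     # highest simulated water level (m)
peak_time = df["WL_CN"].idxmax()   # when the peak occurs
overtops = peak_level > 19.050     # compare with the CN overtopping level
print(peak_level, peak_time, overtops)
```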